Lecture 3: Scalar and Matrix Concentration

Author

  • Michael Mahoney
Abstract

Here, we will give an aside on probability, and in particular on various ways to establish what is known as concentration. Given some information about a distribution, e.g., its mean, its variance, or information about higher moments, there are various ways to establish bounds on the tails of sums of random variables from that distribution. That is, there are various ways to establish that estimates are close to their expected value, with high probability. Today and next time, we will cover several of these methods, both in the case where the random variables are scalar or real-valued and in the case where they are matrix-valued. The former can be used to bound the latter, e.g., by bounding every element of the random matrix individually, but the latter often provide tighter bounds in those cases.
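As a concrete illustration of the kind of tail bound the abstract describes (not part of the lecture itself), the following Python sketch compares the empirical tail probability of a sample mean of bounded random variables against the Hoeffding bound; the distribution, sample size, and threshold are illustrative choices.

```python
# A minimal sketch: for i.i.d. variables in [0, 1], Hoeffding's inequality
# gives P(|X̄ - μ| ≥ t) ≤ 2·exp(-2·n·t²). Compare it to the empirical tail.
import numpy as np

rng = np.random.default_rng(0)
n, trials, t = 100, 100_000, 0.1

# Uniform[0, 1] variables: mean 0.5, bounded in [0, 1].
samples = rng.uniform(0.0, 1.0, size=(trials, n))
means = samples.mean(axis=1)

empirical = np.mean(np.abs(means - 0.5) >= t)
hoeffding = 2 * np.exp(-2 * n * t**2)

print(f"empirical tail: {empirical:.5f}")   # far below the bound
print(f"Hoeffding bound: {hoeffding:.4f}")  # ≈ 0.2707
```

The gap between the two numbers is expected: Hoeffding holds for every bounded distribution, so for any particular one it is typically loose.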


Similar resources

Lecture 4: Concentration and Matrix Multiplication

Today, we will continue with our discussion of scalar and matrix concentration, with a discussion of the matrix analogues of Markov's, Chebyshev's, and Chernoff's inequalities. Then, we will return to bounding the error of our algorithm for approximating matrix multiplication. We will start by using the Hoeffding-Azuma bounds from last class to get improved Frobenius norm bounds, and then (next time...
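The approximate matrix multiplication referred to above is usually done by sampling column/row outer products with norm-proportional probabilities and rescaling so the estimate is unbiased. The following Python sketch shows that idea under those standard assumptions; the dimensions, sample count, and the helper name approx_matmul are illustrative, not the notes' own code.

```python
# A minimal sketch: estimate A @ B by sampling c column/row pairs with
# probabilities p_k ∝ ||A[:, k]|| · ||B[k, :]|| and rescaling by 1/(c·p_k),
# which makes the sampled sum an unbiased estimator of A @ B.
import numpy as np

def approx_matmul(A, B, c, rng=None):
    """Estimate A @ B from c sampled outer products (norm-proportional sampling)."""
    rng = rng if rng is not None else np.random.default_rng()
    p = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    p = p / p.sum()
    idx = rng.choice(A.shape[1], size=c, p=p)
    C = np.zeros((A.shape[0], B.shape[1]))
    for k in idx:
        C += np.outer(A[:, k], B[k, :]) / (c * p[k])  # unbiased rescaling
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 400))
B = rng.standard_normal((400, 30))
C = approx_matmul(A, B, c=200, rng=rng)
err = np.linalg.norm(C - A @ B, "fro") / np.linalg.norm(A @ B, "fro")
print(f"relative Frobenius error: {err:.3f}")
```

Concentration inequalities of the kind discussed in these lectures are exactly what turns this unbiasedness into a high-probability Frobenius norm error bound as c grows.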


Statistical Mechanics and Random Matrices

Table of contents:
1. Introduction
2. Motivations
3. The different scales; typical results
Lecture 1. Wigner matrices and moments estimates
  1. Wigner's theorem
  2. Words in several independent Wigner matrices
  3. Estimates on the largest eigenvalue of Wigner matrices
Lecture 2. Gaussian Wigner matrices and Fredholm determinants
  1. Joint law of the ei...


18.S096: Concentration Inequalities, Scalar and Matrix Versions

These are lecture notes, not in final form, and they will be continuously edited and/or corrected (as I am sure they contain many typos). Please let me know if you find any typo/mistake. Also, I am posting short descriptions of these notes (together with the open problems) on my blog, see [Ban15a]. Concentration and large deviations inequalities are among the most useful tools when understanding the pe...


Electrical Imaging Seminar 9, a lecture by Bill Lionheart, November 2000: 1. Anisotropic conductivity

In the isotropic case we have J = σ∇φ for a scalar conductivity σ. In the anisotropic case we interpret σ as a symmetric matrix multiplying the column vector ∇φ.
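As a small numerical sketch of this distinction (with illustrative values, not from the seminar itself): in the isotropic case the current density J is parallel to ∇φ, while a symmetric conductivity matrix can rotate J relative to the potential gradient.

```python
# A minimal sketch contrasting scalar and matrix conductivity in J = σ∇φ.
import numpy as np

grad_phi = np.array([1.0, 2.0])         # ∇φ at a point (illustrative values)

sigma_scalar = 3.0                      # isotropic conductivity
J_iso = sigma_scalar * grad_phi

sigma_matrix = np.array([[3.0, 1.0],    # symmetric conductivity tensor
                         [1.0, 2.0]])
J_aniso = sigma_matrix @ grad_phi       # current no longer parallel to ∇φ

print(J_iso)    # [3. 6.]  -- parallel to ∇φ
print(J_aniso)  # [5. 5.]  -- rotated relative to ∇φ
```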


Statistical Regularization and Learning Theory, Lecture 12: Complexity Regularization in Regression

Example 1. To illustrate the distinction between classification and regression, consider a simple, scalar signal-plus-noise problem. Consider Y_i = θ + W_i, i = 1, ..., n, where θ is a fixed unknown scalar parameter and the W_i are independent, zero-mean, unit-variance random variables. Let Ȳ = (1/n) ∑_{i=1}^n Y_i. Then, according to the Central Limit Theorem, Ȳ is distributed approximately N(θ, 1/n). ...
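A short simulation of this example, with an assumed non-Gaussian unit-variance noise distribution to show that the normal approximation does not require Gaussian W_i:

```python
# A minimal sketch of the signal-plus-noise example: Y_i = θ + W_i with
# zero-mean, unit-variance noise, so Ȳ is approximately N(θ, 1/n).
import numpy as np

rng = np.random.default_rng(0)
theta, n, trials = 2.0, 100, 50_000

# Unit-variance noise that is not Gaussian (uniform on [-√3, √3]),
# so the approximate normality of Ȳ comes from the CLT.
W = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(trials, n))
Ybar = theta + W.mean(axis=1)

print(f"mean of Ȳ:     {Ybar.mean():.4f}  (θ = {theta})")
print(f"variance of Ȳ: {Ybar.var():.5f}  (1/n = {1 / n})")
```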



Publication year: 2015